%0 Conference Proceedings
%4 sid.inpe.br/sibgrapi/2021/09.06.19.58
%2 sid.inpe.br/sibgrapi/2021/09.06.19.58.55
%@doi 10.1109/SIBGRAPI54419.2021.00038
%T Semi-supervised siamese network using self-supervision under scarce annotation improves class separability and robustness to attack
%D 2021
%A Cavallari, Gabriel,
%A Ponti, Moacir,
%@affiliation Universidade de São Paulo 
%@affiliation Universidade de São Paulo
%E Paiva, Afonso,
%E Menotti, David,
%E Baranoski, Gladimir V. G.,
%E Proença, Hugo Pedro,
%E Junior, Antonio Lopes Apolinario,
%E Papa, João Paulo,
%E Pagliosa, Paulo,
%E dos Santos, Thiago Oliveira,
%E e Sá, Asla Medeiros,
%E da Silveira, Thiago Lopes Trugillo,
%E Brazil, Emilio Vital,
%E Ponti, Moacir A.,
%E Fernandes, Leandro A. F.,
%E Avila, Sandra,
%B Conference on Graphics, Patterns and Images, 34 (SIBGRAPI)
%C Gramado, RS, Brazil (virtual)
%8 18-22 Oct. 2021
%I IEEE Computer Society
%J Los Alamitos
%S Proceedings
%K deep learning, attack, self-supervision, self-supervised learning.
%X Self-supervised learning approaches have been shown to benefit feature learning by training models on a pretext task. In this context, learning from limited data can be tackled by combining semi-supervised learning with self-supervision. In this paper we combine the traditional supervised learning paradigm with the rotation-prediction self-supervised task, using both simultaneously to train a siamese model with a joint loss function and shared weights. In particular, we are interested in the case in which the proportion of labeled to unlabeled data is small. We investigate the effectiveness of the compact feature space obtained after training under such a limited-annotation scenario, in terms of linear class separability and robustness under attack. The study includes images from multiple domains: natural images (STL-10 dataset), products (Fashion-MNIST dataset), and biomedical images (Malaria dataset). We show that, in scenarios with only a few labeled examples, the model augmented with a self-supervised task can take advantage of the unlabeled data to improve the learned representation in terms of linear discrimination, as well as to allow learning even under attack. We also discuss self-supervision design choices and failure cases across the different datasets.
%@language en
%3 81.pdf
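
The abstract describes training a siamese network with shared weights under a joint supervised and rotation-prediction loss. The following minimal PyTorch sketch illustrates that setup; the architecture, dimensions, and loss weighting are illustrative assumptions, not the paper's implementation.

# Minimal sketch (assumed, not the authors' code) of the joint objective
# described in the abstract: a shared-weight encoder with a supervised
# classification head and a rotation-prediction head.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseJoint(nn.Module):
    def __init__(self, n_classes):
        super().__init__()
        # Shared encoder: both branches of the siamese model reuse it.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.class_head = nn.Linear(32, n_classes)  # supervised branch
        self.rot_head = nn.Linear(32, 4)            # 0/90/180/270 degrees

    def forward(self, x_labeled, x_rotated):
        # The same encoder weights are applied to both inputs (siamese sharing).
        return (self.class_head(self.encoder(x_labeled)),
                self.rot_head(self.encoder(x_rotated)))

def joint_loss(class_logits, y, rot_logits, rot_y, alpha=1.0):
    # Joint objective: supervised cross-entropy on the few labeled images
    # plus rotation-prediction cross-entropy on (unlabeled) rotated images.
    # The weighting alpha is an assumption; the paper's exact formulation
    # may differ.
    return F.cross_entropy(class_logits, y) + alpha * F.cross_entropy(rot_logits, rot_y)

In such a setup, the self-supervised inputs can be generated from unlabeled batches with torch.rot90, using the rotation index (0-3) as the pretext-task label.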

